Patent abstract:
This system comprises a drone and a remote station with virtual reality glasses rendering images transmitted from the drone, and provided with means for detecting changes in the orientation of the user's head. The drone generates a "point of view" image (P'1) and a "bird's eye" image (P'2) whose field is wider and whose definition is lower. When the line of sight of the "point of view" image is changed in response to changes in position of the user's head, the station locally generates, during the movement of the user's head, a combination of the "point of view" and "bird's eye" images, with contours (CO') adjusted according to the position changes detected by the detection means.
Publication number: FR3028767A1
Application number: FR1461501
Filing date: 2014-11-26
Publication date: 2016-05-27
Inventors: Henri Seydoux; Gaspard Florentz
Applicant: Parrot SA
IPC main class:
Patent description:

[0001] The invention relates to remote-controlled motorized devices, hereinafter generally referred to as "drones". They may be flying drones, in particular rotary-wing drones such as helicopters, quadcopters and the like. However, the invention is not limited to piloting and exchanging data with flying devices; it applies equally to mobile devices operating on the ground under the control of a remote user, the term "drone" being understood in its most general sense. A typical example of a flying drone is the AR.Drone 2.0 or the Bebop from Parrot SA, Paris, France, which are quadcopters equipped with a series of sensors (accelerometers, three-axis gyrometers, altimeters), a front camera capturing an image of the scene towards which the drone is directed, and a vertically-aimed camera capturing an image of the terrain overflown.
[0002] The documents WO 2010/061099 A2 and EP 2 364 757 A1 (Parrot SA) describe such a drone as well as the principle of piloting it by means of a device with an integrated touch screen and accelerometer, for example an iPhone-type smartphone or an iPad-type tablet (registered trademarks). These devices incorporate the various control elements necessary for detecting piloting commands and for the bidirectional exchange of data with the drone via a wireless link of the Wi-Fi (IEEE 802.11) or Bluetooth local network type. They are further provided with a touch screen displaying the image captured by the front camera of the drone, with a number of symbols superimposed on it enabling commands to be activated simply by the user's finger touching the touch screen. The front camera of the drone is particularly usable for piloting in "immersive mode" or FPV (First-Person View) mode, that is to say where the user uses the image from the camera in the same way as if he were himself on board the drone. It may also be used to capture sequences of images of a scene towards which the drone is moving, so that the user can use the drone in the same way as a camera or camcorder which, instead of being held by hand, would be carried by the drone. The images collected can be recorded, broadcast, posted on websites, sent to other Internet users, shared on social networks, etc. To film the flight properly, it is first necessary to have a camera that can be aimed and stabilized, for example by means of a gimbal, or to provide electronic image stabilization. However, such stabilized image feedback from the drone camera is not ideal for this type of FPV piloting. In particular, the stabilization of the image means that it no longer conveys to the pilot essential information such as the attitude of the aircraft. More particularly, in the case of a hovering drone such as a quadcopter, the conventional image feedback from the camera is of limited interest: with such a drone, one seeks to maneuver close to obstacles, for example to shoot a sequence while flying between the piers of a bridge.
[0003] It is thus desirable for the pilot to be able, in this case, to "glance" left and right during the shooting with the stabilized-image camera. In a conventional FPV device, the pair of virtual reality glasses is equipped with a gyroscope and an accelerometer so as to take into account the movements of the user's head. The principle of conventional operation comprises the following steps: a) measuring the movement of the pilot's head; b) sending the new position of the pilot's head to the drone; c) moving the point of view of the drone's camera according to the new position of the pilot's head; d) acquiring an image; e) reprocessing the image in the case where the camera has highly distorting optics such as a fisheye-type lens; f) encoding the video image; g) transmitting the video image from the drone to the ground station; h) decoding the video image by the ground station; and i) reprojecting the video image by the ground station for each eye.
[0004] Each of these steps takes some time and, even with current electronic means, it is very difficult to perform this set of steps in less than 200 ms (six images at a frame rate of 30 frames/second), mainly because of the data transfer times in steps b) and g). This latency manifests itself as a lack of correlation between the movements of the pilot's head and the image displayed in the virtual reality glasses, which often causes nausea for the user. One solution to avoid this problem could be to capture a 360° image on the drone and transfer it directly to the ground station; steps b), c) and g) above would then no longer be necessary. However, transmitting from the drone to the ground station an entire image of the scene at 360° with the desired definition requires a large bandwidth, incompatible with the relatively limited practical bit rate of the transmission channel between the drone and the ground station. An object of the invention is to propose a camera system which makes it possible to significantly improve the experience of piloting a drone in FPV mode with virtual reality glasses, by reducing the latency observed during the sometimes abrupt movements of the user's head. To this end, the invention proposes an immersive drone piloting system comprising, in a manner known per se, a drone equipped with shooting means and a ground station comprising: virtual reality glasses rendering images taken using the shooting means and transmitted from the drone by wireless communication means; means for detecting changes in the orientation of the head of a user wearing the glasses; as well as ground graphic processing means capable of generating the rendered images.
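To make the latency arithmetic concrete, the following minimal Python sketch sums hypothetical per-step delays for the conventional pipeline of [0003]. The individual figures are illustrative assumptions, not measurements; only the 200 ms order of magnitude and the 30 frames/second rate come from the text above.

```python
# Illustrative latency budget for the conventional FPV pipeline of [0003].
# All per-step figures below are assumptions chosen for illustration.
STAGE_LATENCY_MS = {
    "a) measure head movement":       5,
    "b) uplink head position":       40,   # wireless transfer to the drone
    "c) move camera line of sight":  20,
    "d) acquire image":              33,   # one frame period at 30 fps
    "e) correct fisheye distortion": 10,
    "f) encode video frame":         25,
    "g) downlink video frame":       50,   # wireless transfer to the ground
    "h) decode video frame":         15,
    "i) reproject for each eye":     10,
}

total_ms = sum(STAGE_LATENCY_MS.values())
frames_at_30fps = total_ms / (1000 / 30)
print(f"end-to-end latency: {total_ms} ms = {frames_at_30fps:.1f} frames at 30 fps")
# -> end-to-end latency: 208 ms = 6.2 frames at 30 fps
```

With these assumed figures, the two transfer steps b) and g) alone account for nearly half the total, which is why suppressing them during head movements is the decisive gain.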
[0005] In a manner characteristic of the invention, this system comprises: means for generating on board the drone a first image called the "point of view" image and a second image called the "bird's eye" image, whose field is wider and whose angular definition is lower than those of the "point of view" image, and for transmitting these images to the graphic processing means on the ground; and means provided in the drone for modifying the line of sight of the "point of view" image in response to changes in position of the user's head detected by the detection means and transmitted to the drone via the wireless communication means. The ground graphic processing means are capable of generating locally, during a movement of the user's head and by a combination of the current "point of view" and "bird's eye" images present in the ground station, images to be rendered whose contours are adjusted according to the position changes detected by the detection means. Note that, concretely, the two "point of view" and "bird's eye" images can be separate images, but can also, equivalently, be merged into one and the same overall image whose angular resolution is greater in a "point of view" zone (which is equivalent to the "point of view" image) than in the rest of the overall image (which is equivalent to the "bird's eye" image).
[0006] The system also optionally includes the following advantageous characteristics, taken individually or in any combination which those skilled in the art will recognize as technically compatible: the ground graphic processing means are capable of inlaying the current "point of view" image into the current "bird's eye" image and of applying a variable cropping to the image thus obtained; the shooting means comprise a set of wide-field cameras with different viewing axes; the shooting means comprise cameras of different definitions for the "point of view" images and the "bird's eye" images; the shooting means comprise a common set of cameras all having the same definition, and circuits generating images of different definitions from the common set of cameras; the system comprises a set of cameras with complementary fields covering all directions in a horizontal plane; the system comprises a first camera whose viewing axis is arranged along a main axis of the drone, and two lower-definition cameras whose viewing axes are oriented respectively to the left and to the right with respect to the main axis; the two lower-definition cameras have complementary fields covering all directions in a horizontal plane; at least some cameras have fisheye-type optics, and means are provided for correcting the distortions generated by this type of optics.
[0007] An embodiment of the invention will now be described with reference to the appended drawings, in which the same references designate identical or functionally similar elements from one figure to another. Figure 1 schematically illustrates an arrangement of three cameras for use in the system of the invention. Figure 2 illustrates, for each camera, the area of its sensor that is actually exploited. Figure 3 schematically illustrates the two types of images transmitted by the set of cameras of Figures 1 and 2.
[0008] Figure 4 schematically illustrates the combination of parts of the two images when, in the present example, it is detected that the wearer of the FPV glasses turns his head to the left. Figures 5A and 5B illustrate two combinations of image portions when a rapid rotation of the head to the left is detected.
[0009] Figures 6A to 6C show a real image and, in Figures 6B and 6C, two combinations of images according to the principle of Figure 4 for two different positions of the user's head. Figure 7 is a block diagram of the different functionalities implemented in an electronic circuit board embedded in the drone.
[0010] Figure 8 is a block diagram of the various functionalities implemented in an electronic ground circuit associated with the glasses. Figures 9A to 9C schematically illustrate other camera arrangements that may be used in the system of the present invention. An "immersive mode" or "FPV" shooting system according to the present invention comprises a drone equipped with a set of cameras, and ground equipment communicating with the drone via a wireless link of appropriate range and including virtual reality glasses, provided with means for rendering, in front of the user's eyes, images giving him the sensation of flying on board the drone in the most realistic way possible. The cameras fitted to the drone are wide-field cameras, such as cameras with fisheye-type optics, that is to say equipped with a hemispherical-field lens covering a field of about 180°. In one embodiment of the invention, and with reference to Figure 1, the drone 10 has three cameras 110, 121 and 122. A first, main camera 110 has a fisheye-type lens whose viewing axis A10 is directed forward. It has a large sensor with a large number of pixels, typically 8 to 20 Mpixels with current technologies. As illustrated in Figure 2, the fisheye optics of the main camera 110 generate a circular-contour image IM10, whose boundary circle preferably extends laterally beyond the rectangular boundaries of the sensor C10 so as to maximize the number of exposed pixels, optimize the occupancy of the surface of the sensor, whose aspect ratio is generally 4:3, and favor shooting in the top-to-bottom direction.
[0011] The drone further has two auxiliary cameras 121, 122, pointing in the present example towards each of the two sides of the drone as shown in Figure 1, with viewing axes A21, A22 collinear and opposite. These cameras are also equipped with fisheye lenses, but have smaller sensors C21, C22 with smaller lenses. For these cameras, as also illustrated in Figure 2, the image circle, respectively IM21, IM22, is entirely contained within the extent of the respective sensor C21, C22. Typically, for C21 and C22, sensors of a resolution of 3 Mpixels are used, with current technologies of the type used for full HD cameras. In the present example, about 1.8 Mpixels are effectively illuminated by the respective fisheye lens. As will be seen below, the two auxiliary cameras can have any orientation, as long as their respective viewing axes are essentially opposite, one aim in particular being that the join between the two images, each covering on the order of a half-sphere, is located at the place least awkward for vision and/or for the processing to be performed. It is also possible not to cover the angular zone located near the upward vertical, which is the least useful; for example, the portion of the image above 70° upward may be neglected. From this set of cameras, the electronic circuitry of the drone transmits to the ground station two images intended to be processed, as will be seen, for a combined rendering on the visual displays of a pair of virtual reality glasses. A first image P1, called the "point of view" image, is the one taken by the camera 110, reflecting the point of view of a virtual pilot of the drone, and a second image P2, called the "bird's eye" image, is the one, combined by the on-board electronic circuitry, coming from the two side cameras 121 and 122. The first image is a full-definition image corresponding to a limited field of view, while the second image is a lower-resolution image over a field of view of 360° horizontally and 360°, or slightly less, vertically. It will be noted that the focal lengths of the three cameras are all identical, so that the two images can be superimposed without anomalies. Figure 3 reflects, by the size of the images P1 and P2, the bandwidth requirements linked to the angular pixel resolution of these images, the image P1 being more demanding in terms of bandwidth than the image P2, which is consequently represented with a smaller size. Note that the invention may be advantageous in a case where a VGA image (640x480) is used for the "point of view" zone of about 90° x 90° of field of view, and another VGA image for everything else (360° x 360° of field of view).
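The bandwidth saving can be checked with a short calculation. The sketch below uses only the VGA figures and fields of view quoted above and compares the two-image scheme with a hypothetical single image covering the whole sphere at the angular resolution of P1; the pixel counts are the only inputs.

```python
# Back-of-the-envelope comparison using the VGA figures quoted above:
# one VGA "point of view" image covering ~90° x 90°, plus one VGA
# "bird's eye" image covering the whole sphere (360° x 360°).
pov_px  = 640 * 480            # "point of view" image P1
bird_px = 640 * 480            # "bird's eye" image P2

# A single uniform image of the whole sphere at P1's angular resolution
# (640 px / 90° of field) would need 2560 x 1920 pixels:
full_sphere_px = (640 * 360 // 90) * (480 * 360 // 90)

two_image_px = pov_px + bird_px
print(f"two-image scheme  : {two_image_px / 1e6:.2f} Mpixels per frame")
print(f"uniform 360° image: {full_sphere_px / 1e6:.2f} Mpixels per frame")
print(f"bandwidth ratio   : {full_sphere_px / two_image_px:.1f}x")
# -> roughly 0.61 vs 4.92 Mpixels, i.e. about 8x less data to transmit
```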
[0012] The different steps implemented in a system according to the invention will now be described, in order to obtain in the virtual reality glasses an image reflecting an FPV-type experience.
[0013] Generation and transmission of the "point of view" image P1. This step implements the following operations: 1) a sensor (accelerometer or other) fitted to the glasses measures the movements of the user's head; 2) the position information of the user's head is sent periodically to the drone circuitry from that of the ground station via the wireless communication channel, at a rate typically corresponding to that of the images to be rendered, for example at least 30 times per second; 3) aboard the drone, the new line of sight for the "point of view" image is defined according to said head position information received; 4) each image taken by the camera is cropped according to this line of sight to generate the image P1; 5) in the drone circuitry, this image is, if necessary, reprocessed in order to compensate for the distortion induced by the fisheye lens (such processing is known per se and will not be described in more detail); 6) the image thus reprocessed is encoded, preferably with compression, with an appropriate algorithm; 7) the compressed image is transmitted to the ground station via the wireless communication channel. These operations are repeated, for example at a rate of 30 images per second, with each time an update of the viewing axis A10 of the camera 110 and the corresponding cropping.
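As an illustration of step 4, the following Python sketch crops a "point of view" window out of a wider, already distortion-corrected frame according to the received head yaw. The field-of-view values and image sizes are illustrative assumptions, and only a pure horizontal (yaw) shift is handled; the real system would also handle pitch and operate before or together with the fisheye correction of step 5.

```python
import numpy as np

def crop_point_of_view(frame: np.ndarray, yaw_deg: float,
                       frame_fov_deg: float = 180.0,
                       out_fov_deg: float = 90.0) -> np.ndarray:
    """Return the sub-image centred on `yaw_deg` (0 = straight ahead)."""
    h, w = frame.shape[:2]
    px_per_deg = w / frame_fov_deg              # horizontal angular resolution
    out_w = int(out_fov_deg * px_per_deg)       # width of the P1 window
    centre = w // 2 + int(yaw_deg * px_per_deg) # shift window with the head
    left = int(np.clip(centre - out_w // 2, 0, w - out_w))
    return frame[:, left:left + out_w]

wide = np.zeros((960, 1920, 3), dtype=np.uint8)  # dummy 180°-wide frame
p1 = crop_point_of_view(wide, yaw_deg=20.0)      # head turned 20° to the right
print(p1.shape)                                  # (960, 960, 3)
```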
[0014] It should be noted here that, as a variant, a camera 110 movable by actuators could be provided so as to adjust its physical viewing axis in response to the head position information. According to another variant, the "point of view" image may, in the same way as the "bird's eye" image, be generated by combining the images taken by two or more differently oriented cameras.
[0015] Generation and transmission of the "bird's eye" image P2. This step involves the following operations: 1) two images are acquired using the two side cameras 121 and 122; 2) the two images are combined into a single image by the on-board electronic circuitry; 3) the image thus combined is encoded, preferably with compression, with an appropriate algorithm; 4) the compressed image is transmitted to the ground station via the wireless communication channel. Image processing in the ground station. As long as the user does not move his head, the ground station displays in the glasses the high-definition "point of view" images P1 streamed from the drone. A frame rate of 30 frames per second is possible here because no reframing of the image taken by the camera 110 is necessary.
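Assuming the two side images have already been unwrapped into equirectangular half-panoramas (the fisheye correction itself is not shown), the combination of steps 1 and 2 reduces to a juxtaposition followed by subsampling to the lower "bird's eye" definition, as in this sketch; the image sizes and the subsampling factor are assumptions.

```python
import numpy as np

def build_birds_eye(left_half: np.ndarray, right_half: np.ndarray,
                    subsample: int = 4) -> np.ndarray:
    """Combine two half-sphere panoramas into one low-definition 360° strip."""
    panorama = np.concatenate([left_half, right_half], axis=1)  # full 360°
    return panorama[::subsample, ::subsample]                   # lower definition

left  = np.zeros((1080, 1920, 3), dtype=np.uint8)  # camera 121, unwrapped
right = np.zeros((1080, 1920, 3), dtype=np.uint8)  # camera 122, unwrapped
p2 = build_birds_eye(left, right)
print(p2.shape)   # (270, 960, 3)
```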
[0016] But when the user turns his head, the set of steps of the above process of generating and transmitting the "point of view" image requires a processing time which, as explained above, is incompatible with the image rate sought, because of the cropping of the image P1 that has to be performed for each individual image.
[0017] According to the invention, the electronic circuitry of the station constructs, for the duration of the transition (i.e. until the user's head is again still), transition images from the most recent P1 and P2 images available at that time in the ground station, and renders them in the virtual reality glasses. These transition images are created from data fully available at the ground station and glasses, by combination and reframing as will be seen below. Given that no transfer via the communication channel is necessary and that only a reframing and a refresh of the display are to be performed, the latency of this operation can be extremely low.
[0018] It will be understood that the wider-field image P2 from the side cameras 121, 122 could simply be cropped on the ground and rendered to provide the desired transition. But this P2 image has a lower definition than the "point of view" image normally generated with the camera 110, and such a solution would cause a sharp drop in the resolution of the entire image when the transition image is rendered. To avoid this phenomenon, a combination of images is performed with the graphics processor fitted to the ground station.
[0019] Thus, Figure 4 illustrates a combination of images corresponding to the case where the user has turned his head to the left. In this example, the image to be rendered is obtained by combining, on the one hand, a fraction P'1 of the "point of view" image P1 obtained by eliminating a marginal part on the right, whose width is all the greater as the angle of rotation of the user's head is large, and by completing the image thus reduced P'1, on its left edge, with a fraction P'2 of the "bird's eye" image corresponding to this zone. This combination of image fractions is performed by simple operations of reframing and juxtaposition, knowing the angular amplitude of the lateral rotation of the head and also knowing, by construction, the correspondence between the frame of the image P1 and that of the image P2. Thus, most of the image rendered to the user remains in high definition, and only a marginal portion of the image is at a lower resolution, and this only temporarily, as long as the orientation of the user's head has not stabilized. It should be noted that the rate at which these reframings/juxtapositions are performed can be decorrelated from the rate of reception of the high-definition "point of view" images (typically 30 images per second with current technologies), and can be higher. In particular, the virtual reality glasses are capable of rendering at a rate of 75 frames per second or more, and since the aforementioned reframing/juxtaposition processing is sufficiently light in terms of graphics processor load, this rate is achievable.
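The following Python sketch reproduces this reframing/juxtaposition for a rotation of the head to the left. The geometry is reduced to a pure horizontal pixel shift and the sizes are illustrative assumptions; in practice, shift_px would be derived from the measured head angle and the known pixels-per-degree correspondence between the P1 and P2 frames.

```python
import numpy as np

def transition_image(p1: np.ndarray, p2_region: np.ndarray,
                     shift_px: int) -> np.ndarray:
    """p2_region: slice of P2, upscaled to P1's scale, just left of P1's field."""
    p1_frac = p1[:, : p1.shape[1] - shift_px]                # P'1: drop right margin
    p2_frac = p2_region[:, p2_region.shape[1] - shift_px:]   # P'2: matching strip
    return np.concatenate([p2_frac, p1_frac], axis=1)        # juxtapose

p1 = np.full((480, 640, 3), 200, dtype=np.uint8)      # high-definition image
p2_left = np.full((480, 640, 3), 80, dtype=np.uint8)  # upscaled P2, left of P1
out = transition_image(p1, p2_left, shift_px=120)     # head turned to the left
print(out.shape)   # (480, 640, 3) -- same size, contour shifted left
```

Because only array slicing and concatenation are involved, such an operation is light enough to run at the 75 frames per second or more mentioned above, entirely locally.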
[0020] During the entire transition period, this generation of juxtaposed images is carried out according to the current position of the user's head. Thus, Figures 5A and 5B illustrate the simplified case where the ground station generates and displays two successive combinations of fractions, respectively P'1, P'2 and P"1, P"2, of the images P1 and P2 available on the ground, the fraction P"2 being wider than the fraction P'2 because, between the two instants, the user's head has continued to turn. The operation of the invention will now be explained with reference to Figures 6A to 6C on real images.
[0021] The image of Figure 6A corresponds to an inlay of the high-definition "point of view" image P1 into a "bird's eye" image P2 of the same scene, of wider field and lower definition, the axes of the two images being merged. This inlay is performed in the ground station, and in fact plays the role of the juxtaposition of images explained in the foregoing. As long as the user keeps his head straight, the image rendered in the virtual reality glasses is the "point of view" image P1 alone, whose contours CO are illustrated in Figure 6B. When the user turns his head, in the present example to the left, the contour of the image actually rendered to the user (noted here CO') is moved to the left, and it will be understood that the image rendered to the user corresponds to a combination of a fraction P'1 of the image P1, deprived of a band of determined width on the right, and a fraction P'2 of the image P2 located immediately to the left of the image P1 (the border between these two image fractions being indicated in dashed lines in Figure 6B). Admittedly, the left part of the image is of lower definition, but it has the advantage of being immediately available to be combined with the truncated image P1 and then displayed. The result is an imperceptible latency and an image quality quite acceptable for the user. Of course, the head rotation information is also sent to the electronic circuitry of the drone so as to adjust accordingly the viewing axis A10 of the camera 110 and return to the ground station the new view generated by the camera 110, which will then be rendered in the glasses, as long as the movements of the user's head are slow enough to allow good fluidity without having to perform the described reframing/combination processing. Figure 7 illustrates, in the form of a block diagram, a possible implementation of the present invention on the drone side 100.
[0022] First of all, it should be noted that instead of providing dedicated cameras for the "point of view" function and for the "bird's eye" function, the same camera can participate both in the generation of a "point of view" image and in the generation of a "bird's eye" image. In this case, the camera 110, unique in the embodiment described so far, can be replaced by a set of cameras. Thus, Figure 7 schematically shows n cameras 110-1 to 110-n. The outputs of these cameras are connected on the one hand to a circuit 130 for generating a "point of view" image, and on the other hand to a circuit 140 for generating a "bird's eye" image. The circuit 140 takes into account only a portion of the pixels relative to the circuit 130, so as to generate a lower-definition image than the circuit 130. The wireless communication circuit receives at its receiving part 151 the absolute or relative position information of the user's head, and applies this information to the "point of view" image generation circuit in order to adjust, here by digital processing of a combined image of wider field than the one actually returned, the line of sight for the image P1 to be rendered. The images from the circuits 130 and 140 are sent to the ground station by the transmission part 152 of the wireless communication circuit.
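The role of circuit 140 can be sketched as a simple pixel subsampling of the common camera stream; the sensor size and the factor of 4 below are illustrative assumptions.

```python
import numpy as np

# Deriving the lower-definition "bird's eye" stream from the same sensors
# as circuit 130 by keeping only 1 pixel in 4 in each direction.
full_def = np.zeros((2448, 3264, 3), dtype=np.uint8)  # ~8 Mpixel frame (circuit 130)
low_def = full_def[::4, ::4]                          # ~0.5 Mpixel frame (circuit 140)
print(low_def.shape)   # (612, 816, 3)
```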
[0023] It will be appreciated that, for carrying out the invention, these images can be sent either separately or in combination. Figure 8 schematically illustrates the elements provided in the ground station 200 to implement the invention. The receiving part 251 of the wireless communication circuit receives the images P1 and P2 from the drone, either separately or in combination. A graphics processing circuit 210 generates, from the images received, an image of the type of that of Figure 6A, with the high-definition "point of view" image inlaid in the "bird's eye" image. A device 220 for acquiring the rotation of the user's head, which conventionally equips the virtual reality glasses to which the ground station is connected, delivers this information to a graphics processing circuit 230 capable of producing the reframing (the juxtaposition being in this case already achieved by the inlay performed by the circuit 210, as explained), according to the position information received from the glasses.
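A combined sketch of circuits 210 and 230 follows, under the same simplifying assumptions as before: centred placement of the inlay, a pure horizontal shift for the head rotation, and illustrative image sizes.

```python
import numpy as np

def inlay(p2_up: np.ndarray, p1: np.ndarray) -> np.ndarray:
    """Circuit 210: embed P1 in the centre of the upscaled P2 (Figure 6A)."""
    H, W = p2_up.shape[:2]
    h, w = p1.shape[:2]
    top, left = (H - h) // 2, (W - w) // 2
    out = p2_up.copy()
    out[top:top + h, left:left + w] = p1
    return out

def viewport(combined: np.ndarray, size: tuple, shift_px: int) -> np.ndarray:
    """Circuit 230: crop a window of `size`, shifted by the head rotation."""
    H, W = combined.shape[:2]
    h, w = size
    top, left = (H - h) // 2, (W - w) // 2 + shift_px
    left = int(np.clip(left, 0, W - w))
    return combined[top:top + h, left:left + w]

p2_up = np.full((960, 1920, 3), 80, dtype=np.uint8)   # upscaled bird's eye
p1 = np.full((480, 640, 3), 200, dtype=np.uint8)      # point of view
frame = viewport(inlay(p2_up, p1), (480, 640), shift_px=-150)  # head to the left
print(frame.shape)   # (480, 640, 3)
```

Implementing the two functions in a single routine mirrors the remark below that circuits 210 and 230 may in practice be one and the same circuit.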
[0024] It will be noted here that in practice the circuits 210 and 230 may be constituted by one and the same circuit. Meanwhile, the transmitting part 252 of the wireless communication circuit of the ground station sends to the drone the position information of the user's head in order to update the line of sight of the "point of view" image P1, as explained above. Referring now to Figures 9A to 9C, other possible configurations of the set of cameras equipping the drone are illustrated. Figure 9A schematically shows a "point of view" camera 110 aiming along the axis of the drone, and two "bird's eye" cameras whose viewing axes are not opposite as in the case of Figure 1, but oblique to the left and to the right (towards the front). Admittedly, this limits the range of possibilities, especially if the user turns his head completely backwards, but it makes it possible to have a "bird's eye" image of higher quality at constant bandwidth.
[0025] Figure 9B illustrates the use of only two cameras 111, 112, aimed to the left and to the right. In this case, most "point of view" images will come from a combination of the images taken by the two cameras. As a variant, only two cameras may be provided, aimed respectively forwards and backwards.
[0026] Finally, Figure 9C illustrates the use of three cameras 111, 112, 113 of identical definitions, whose physical viewing axes are spaced 120° apart. Preferably, one of these cameras aims forward. Of course, the present invention is not limited to the embodiments described and shown, and the skilled person will be able to make many variations and modifications. It applies to drones of various types, for inspection, leisure or other purposes, hovering or not. It also applies to different types of virtual reality glasses, with on-board or remote electronics.
Claims:
Claims (9)
[0001]
1. An immersive drone piloting system, comprising: - a drone (100) equipped with shooting means (110, 121, 122); and - a ground station (200) comprising: virtual reality glasses rendering images taken using the shooting means and transmitted from the drone by wireless communication means; means (220) for detecting changes in the orientation of the head of a user wearing the glasses; and means (210, 230) for graphic processing on the ground, capable of generating the rendered images; the system being characterized in that it comprises: means (110, 121, 122, 130, 140) for generating on board the drone a first image called the "point of view" image and a second image called the "bird's eye" image, whose field is wider and whose angular definition is lower than those of the "point of view" image, and for transmitting these images to the graphic processing means on the ground; and means (151, 130) provided in the drone for modifying the line of sight of the "point of view" image in response to changes in position of the user's head detected by the detection means and transmitted to the drone via the wireless communication means (151, 252); and in that the ground graphic processing means (210, 230) are capable of generating locally, during a movement of the user's head and by a combination of the current "point of view" and "bird's eye" images (P1, P2) present in the ground station, images to be rendered (P'1-P'2, P"1-P"2) with contours (CO') adjusted according to the changes of position detected by the detection means.
[0002]
2. The system of claim 1, wherein the means (210, 230) for graphic processing on the ground are able to perform an inlay of the current "point of view" image (P1) into the current "bird's eye" image (P2) and to apply a variable cropping to the image thus obtained.
[0003]
3. The system of claim 1, wherein the shooting means comprise a set of wide-field cameras with different viewing axes (110, 121, 122; 110-1 to 110-n; 111, 112; 111-113).
[0004]
4. The system of claim 3, wherein the shooting means comprise cameras of different definitions (110; 121, 122) for the "point of view" images and the "bird's eye" images.
[0005]
5. The system of claim 3, wherein the shooting means comprise a common set of cameras (110-1 to 110-n) all having the same definition, and circuits (130, 140) for generating images of different definitions from the common set of cameras.
[0006]
6. The system of claim 3, comprising a set of cameras with complementary fields (111, 112; 121, 122; 111-113) covering all directions in a horizontal plane.
[0007]
7. The system of claim 4, comprising a first camera (110) whose viewing axis is arranged along a main axis of the drone, and a plurality of lower-definition cameras (121, 122) whose viewing axes are oriented respectively to the left and to the right with respect to the main axis.
[0008]
8. The system of claim 7, wherein the lower-definition cameras (121, 122) have complementary fields covering all directions in a horizontal plane.
[0009]
9. The system of claim 3, wherein at least some cameras have fisheye-type optics, and wherein means are provided for correcting the distortions generated by this type of optics.
Similar technologies:
Publication number | Publication date | Patent title
EP3025770B1|2017-01-25|Video system for piloting a drone in immersive mode
EP3086195B1|2019-02-20|System for piloting a drone in first-person view mode
EP3078402B1|2017-10-04|System for piloting an fpv drone
EP3048789B1|2016-12-28|Drone provided with a video camera and means to compensate for the artefacts produced at the greatest roll angles
KR101986329B1|2019-06-05|General purpose spherical capture methods
EP2933775B1|2016-12-28|Rotary-wing drone provided with a video camera supplying stabilised image sequences
EP3142353B1|2019-12-18|Drone with forward-looking camera in which the control parameters, especially autoexposure, are made independent of the attitude
JP2018522429A|2018-08-09|Capture and render panoramic virtual reality content
CN104781873A|2015-07-15|Image display device and image display method, mobile body device, image display system, and computer program
JP2015149634A|2015-08-20|Image display device and method
US20170286993A1|2017-10-05|Methods and Systems for Inserting Promotional Content into an Immersive Virtual Reality World
EP3273318B1|2021-07-14|Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
FR3058238A1|2018-05-04|SELF-CONTAINING DRONE-DRIVED VIEWING SYSTEM WITH TARGET TRACKING AND TARGET SHIFTING ANGLE HOLDING.
EP3273317A1|2018-01-24|Autonomous system for taking moving images, comprising a drone and a ground station, and associated method
EP3011548B1|2020-07-29|Method for carrying out a real situation simulation test including generation of various virtual contexts
JP6899875B2|2021-07-07|Information processing device, video display system, information processing device control method, and program
WO2013160255A1|2013-10-31|Display device suitable for providing an extended field of vision
TWI436270B|2014-05-01|Telescopic observation method for virtual and augmented reality and apparatus thereof
EP3281870A1|2018-02-14|Method for capturing a video by a drone, related computer program and electronic system for capturing a video
EP3602253A1|2020-02-05|Transparency system for commonplace camera
JP2021061505A|2021-04-15|Imaging apparatus
FR3020168A1|2015-10-23|ROTATING WING DRONE WITH VIDEO CAMERA DELIVERING STABILIZED IMAGE SEQUENCES
FR3052678A1|2017-12-22|DRONE PROVIDED WITH A FRONTAL VIDEO CAMERA COMPRESSING THE INSTANTANEOUS ROTATIONS OF THE DRONE AND CORRECTION OF THE ARTIFACTS
Patent family:
Publication number | Publication date
US9747725B2|2017-08-29|
EP3025770B1|2017-01-25|
US20160148431A1|2016-05-26|
CN105763790A|2016-07-13|
FR3028767B1|2017-02-10|
EP3025770A1|2016-06-01|
JP2016119655A|2016-06-30|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US3564134A|1968-07-03|1971-02-16|Us Navy|Two-camera remote drone control|
WO1998046014A1|1997-04-07|1998-10-15|Interactive Pictures Corporation|Method and apparatus for inserting a high resolution image into a low resolution interactive image to produce a realistic immersive experience|
FR2922666A1|2007-10-17|2009-04-24|Aerodrones Sarl|Modular exploiting and managing system controlling method for e.g. drone, involves selecting navigation data to be considered for managing platform according to mission data derived from ground station by automation block|
JP2010152835A|2008-12-26|2010-07-08|Ihi Aerospace Co Ltd|Unmanned mobile body system|
EP2364757A1|2010-03-11|2011-09-14|Parrot|Method and device for remote control of a drone, in particular a rotary-wing drone|
EP2557468A2|2011-08-09|2013-02-13|Kabushiki Kaisha Topcon|Remote control system|
KR20050042399A|2003-11-03|2005-05-09|Samsung Electronics Co., Ltd.|Apparatus and method for processing video data using gaze detection|
US8675068B2|2008-04-11|2014-03-18|Nearmap Australia Pty Ltd|Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features|
FR2938774A1|2008-11-27|2010-05-28|Parrot|DEVICE FOR CONTROLLING A DRONE|
CN101616310B|2009-07-17|2011-05-11|Tsinghua University|Target image stabilizing method of binocular vision system with variable visual angle and resolution ratio|
US8184069B1|2011-06-20|2012-05-22|Google Inc.|Systems and methods for adaptive transmission of data|
US9347792B2|2011-10-31|2016-05-24|Honeywell International Inc.|Systems and methods for displaying images with multi-resolution integration|
GB2497119B|2011-12-01|2013-12-25|Sony Corp|Image processing system and method|
CN103955228A|2014-04-29|2014-07-30|Xi'an Jiaotong University|Aerial photography imaging and controlling device|

Cited by:
KR20160125674A|2015-04-22|2016-11-01|LG Electronics Inc.|Mobile terminal and method for controlling the same|
WO2017017675A1|2015-07-28|2017-02-02|Margolin Joshua|Multi-rotor uav flight control method and system|
KR20170095030A|2016-02-12|2017-08-22|Samsung Electronics Co., Ltd.|Scheme for supporting virtual reality content display in communication system|
US20170374276A1|2016-06-23|2017-12-28|Intel Corporation|Controlling capturing of a multimedia stream with user physical responses|
CN106339079A|2016-08-08|2017-01-18|Graduate School at Shenzhen, Tsinghua University|Method and device for realizing virtual reality by using unmanned aerial vehicle based on computer vision|
JP6851470B2|2016-09-26|2021-03-31|SZ DJI Technology Co., Ltd.|Unmanned aerial vehicle control methods, head-mounted display glasses and systems|
CN106375669B|2016-09-30|2019-08-06|Tianjin Yuandu Technology Co., Ltd.|A kind of digital image stabilization method, device and unmanned plane|
US11011140B2|2016-11-14|2021-05-18|Huawei Technologies Co., Ltd.|Image rendering method and apparatus, and VR device|
CN106708074B|2016-12-06|2020-10-30|Shenzhen Launch Tech Co., Ltd.|Method and device for controlling unmanned aerial vehicle based on VR glasses|
CN108234929A|2016-12-21|2018-06-29|Yuneec International Co., Ltd.|Image processing method and equipment in unmanned plane|
US10687050B2|2017-03-10|2020-06-16|Qualcomm Incorporated|Methods and systems of reducing latency in communication of image data between devices|
CN107065905A|2017-03-23|2017-08-18|Southeast University|A kind of immersion unmanned aerial vehicle control system and its control method|
US11049219B2|2017-06-06|2021-06-29|Gopro, Inc.|Methods and apparatus for multi-encoder processing of high resolution content|
JP7000050B2|2017-06-29|2022-01-19|Canon Inc.|Imaging control device and its control method|
US11178377B2|2017-07-12|2021-11-16|Mediatek Singapore Pte. Ltd.|Methods and apparatus for spherical region presentation|
CN110869730A|2017-07-17|2020-03-06|Chongqing Saizhenda Intelligent Technology Co., Ltd.|Remote in-situ driving unmanned vehicle operation system and automatic driving automobile test field system|
CN108646776B|2018-06-20|2021-07-13|Zhuhai Kingsoft Online Game Technology Co., Ltd.|Imaging system and method based on unmanned aerial vehicle|
CN108873898A|2018-06-26|2018-11-23|Wuhan University of Technology|A kind of long-range Ride Control System of immersion and method based on real time data interaction|
CN109040555A|2018-08-22|2018-12-18|Truly Opto-Electronics Co., Ltd.|A kind of display system and method for FPV image|
CN110944222B|2018-09-21|2021-02-12|Shanghai Jiao Tong University|Method and system for immersive media content as user moves|
CN109151402A|2018-10-26|2019-01-04|Shenzhen Autel Intelligent Aviation Technology Co., Ltd.|Image processing method, image processing system and the unmanned plane of aerial camera|
US11228781B2|2019-06-26|2022-01-18|Gopro, Inc.|Methods and apparatus for maximizing codec bandwidth in video applications|
Legal status:
2015-11-25| PLFP| Fee payment|Year of fee payment: 2 |
2016-05-27| PLSC| Publication of the preliminary search report|Effective date: 20160527 |
2016-11-11| TP| Transmission of property|Owner name: PARROT DRONES, FR Effective date: 20161010 |
2016-11-21| PLFP| Fee payment|Year of fee payment: 3 |
2017-11-08| PLFP| Fee payment|Year of fee payment: 4 |
Priority:
Application number | Filing date | Patent title
FR1461501A|2014-11-26|VIDEO SYSTEM FOR DRIVING A DRONE IN IMMERSIVE MODE|

Related applications:
FR1461501A| FR3028767B1|2014-11-26|VIDEO SYSTEM FOR DRIVING A DRONE IN IMMERSIVE MODE|
US14/923,307| US9747725B2|2014-11-26|2015-10-26|Video system for piloting a drone in immersive mode|
EP15194270.3A| EP3025770B1|2014-11-26|2015-11-12|Video system for piloting a drone in immersive mode|
JP2015229445A| JP2016119655A|2014-11-26|2015-11-25|Video system for piloting drone in immersive mode|
CN201511036062.9A| CN105763790A|2014-11-26|2015-11-25|Video System For Piloting Drone In Immersive Mode|